Container Auto-dimensioning
Patent abstract:
Embodiments of the present invention generally relate to the field of space dimensioning. In one embodiment, the present invention is directed to a method for dimensioning a container, wherein the space is three-dimensional and definable via height, width, and depth coordinates. The method comprises: obtaining a 3D image of at least a portion of the space; analyzing the image to determine a first equation and a second equation defining a first plane and a second plane associated with a first wall and a second wall bounding the container; solving the first equation for a first coordinate value; solving the second equation for a second coordinate value, wherein the first coordinate value and the second coordinate value are each one of a width coordinate or a height coordinate; and calculating a first distance based at least in part on the first coordinate value and the second coordinate value.

Publication number: BE1025930B1
Application number: E201805903
Filing date: 2018-12-18
Publication date: 2019-11-27
Inventors: Adithya H Krishnamurthy; Justin F Barish
Applicant: Symbol Technologies Llc
Patent description:
Container Auto-dimensioning

BACKGROUND

Goods can be transported in different ways using different methods. In particular, long-distance transportation often uses containers that can be loaded with goods and then moved by vehicles, trains, vessels, or aircraft to their desired destinations. Short-distance transport similarly uses vehicles such as delivery trucks / lorries to which containers are attached for storing items and cargo, although detachable containers are not always used. In the past, most loading or unloading was performed without significant contribution from computerized systems. However, with the development of computing capabilities, the availability of sensed environmental data, and the ever-increasing focus on efficiency, current loading and unloading procedures are monitored, guided, and / or assisted by computer platforms that can act on information immediately. One of the parameters that can advantageously be used by such computer platforms is the size of a container. Knowing the size of a container can help with, for example, monitoring how full it is, preventing the loading of oversized cargo, preventing the placement of oversized loading equipment in the container, more accurate planning of the loading and / or unloading process, and so on. Considering that container dimensions vary not only between different types of containers (for example, trailers designed to be coupled to and towed by tractors, and delivery trucks) but also within the same type of container (e.g., different sized delivery trucks, different sized trailers, etc.), there is a need for improved, automated means for detecting and reporting dimensions of a container. In addition, there is a need for such means to operate efficiently.

SUMMARY OF THE INVENTION

According to an aspect of the invention, a method is provided for dimensioning a space bounded by at least a first wall and a second wall opposite the first wall, the space being three-dimensional and definable via height, width, and depth coordinates, the method comprising: obtaining, by an image capture device, a three-dimensional image of at least a portion of the space; analyzing the three-dimensional image to determine a first equation defining a first plane corresponding to the first wall and to determine a second equation defining a second plane corresponding to the second wall; solving the first equation for a first coordinate value; solving the second equation for a second coordinate value, the first coordinate value and the second coordinate value being one of a width coordinate or a height coordinate; and calculating a first distance based at least in part on the first coordinate value and the second coordinate value.

The first wall can, for example, be parallel to the second wall. The operation of analyzing the three-dimensional image may preferably include performing a sample consensus (SAC) segmentation analysis. The first equation can be expressed as Ax + By + Cz = D and the second equation can be expressed as Ex + Fy + Gz = H, where x corresponds to a width coordinate value, y corresponds to a height coordinate value, and z corresponds to a depth coordinate value. The operations of solving the first equation and solving the second equation may include selecting a z value corresponding to a distance, along the respective axis, not greater than 3 meters from an image capture device used to obtain the three-dimensional image.
The operation of calculating the first distance may include subtracting the first coordinate value from the second coordinate value. The three-dimensional image can comprise a point cloud. The method may further include correcting for distortion caused by an image capture device used to obtain the three-dimensional image.

The method may further comprise: solving the first equation for a third coordinate value; solving the second equation for a fourth coordinate value, the third coordinate value and the fourth coordinate value being the same one of a width coordinate or a height coordinate as the first coordinate value and the second coordinate value; calculating a second distance between the first wall and the second wall based at least in part on the third coordinate value and the fourth coordinate value; and calculating a third distance based at least in part on the first distance and the second distance.

The first distance may, for example, correspond to a separation between the first wall and the second wall. The space can preferably be an inner space of a container that has an opening. The operation of obtaining the three-dimensional image may include mounting a three-dimensional image capture device near the opening.

According to a further aspect of the invention, a method is provided for dimensioning a height of a space bounded by at least a floor, a first upstanding wall, and a second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates, the method comprising: obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising three-dimensional point data; analyzing the three-dimensional image to determine a first equation defining a first plane corresponding to the floor; solving the first equation for a first height coordinate value; determining a second height coordinate value selected from a first plurality of largest height coordinate values associated with the first upstanding wall; determining a third height coordinate value selected from a second plurality of largest height coordinate values associated with the second upstanding wall; and calculating a first distance based at least in part on the first height coordinate value and the lowest of the second height coordinate value and the third height coordinate value.

For example, the first equation can be expressed as Ax + By + Cz = D, and the operation of solving the first equation can include solving for y by selecting an x value and a z value corresponding to respective distances, along the respective axes, of no more than 3 meters from an image capture device used to obtain the three-dimensional image. The first wall can preferably be opposite and parallel to the second wall. The space can be further bounded by a ceiling, and the three-dimensional point data can omit data associated with the ceiling. The method may further include applying a noise filter to the three-dimensional image prior to the operations of determining the second height coordinate value and determining the third height coordinate value. The operation of analyzing the three-dimensional image may include, for example, performing a sample consensus (SAC) segmentation analysis.
According to a further aspect of the invention, a method is provided for dimensioning a depth of a space bounded by at least a floor, a first upstanding wall, a second upstanding wall opposite and parallel to the first upstanding wall, and a third upstanding wall normal to the first upstanding wall and the second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates, the method comprising: obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising points with three-dimensional point data; obtaining a two-dimensional image of the at least one portion of the space, the two-dimensional image comprising pixels with pixel data, at least some of the points corresponding to some of the pixels; and performing dimensional analysis on a filtered portion of the two-dimensional image, the filtered portion comprising at least some of the pixels that do not correspond to at least some of the points.

The method may preferably further comprise superimposing the three-dimensional image over the two-dimensional image to obtain the filtered portion of the two-dimensional image, the filtered portion being a section of the two-dimensional image over which points of the three-dimensional image have been overlaid. The operation of performing dimensional analysis may include, for example, comparing the filtered portion of the two-dimensional image with a plurality of previously existing images of spaces that each have a known depth. The space can preferably be an inner space of a container that has an opening, and the operations of obtaining the three-dimensional image and obtaining the two-dimensional image can include mounting an image capture device near the opening, the image capture device being arranged to take the three-dimensional image and the two-dimensional image from essentially the same point of view. The operation of performing dimensional analysis may include analyzing the filtered portion of the two-dimensional image with at least some three-dimensional point data of the three-dimensional image.

BRIEF DESCRIPTION OF THE VIEWS OF THE FIGURES

The accompanying figures, in which like reference numerals refer to identical or functionally similar elements in the individual views, together with the figure description below, are incorporated into and form part of the description, and serve to further illustrate embodiments of concepts that include the claimed invention and to explain various principles and advantages of these embodiments.

FIG. 1 shows a loading facility in accordance with an embodiment of the invention;
FIG. 2 shows an inside of the loading facility of FIG. 1;
FIG. 3 shows a trailer monitoring unit in accordance with an embodiment of the invention;
FIG. 4A is a top view of the loading facility of FIG. 1 showing an example field of view of a container monitoring unit;
FIG. 4B is a side view of the loading facility of FIG. 1 showing an example field of view of a container monitoring unit;
FIG. 5 is a schematic exemplary block diagram of a communication network implemented in the facility of FIG. 1;
FIG. 6 is a flow chart representative of a method for auto-dimensioning a trailer in accordance with an embodiment of the invention;
FIGs. 7A and 7B show 2D and 3D images of a container as recorded by an image capture device in accordance with an embodiment of the invention;
FIG. 8 is a flow chart representative of a method for auto-dimensioning a trailer in accordance with an embodiment of the invention;
FIG. 9 is a flow chart representative of a method for auto-dimensioning a trailer in accordance with an embodiment of the invention;
FIG. 10 shows an example of a filtered image of the image of FIG. 7A.

Elements in the figures are shown for simplicity and clarity and are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve the understanding of embodiments of the present invention. The device and method components are represented, where appropriate, by conventional symbols in the figures, showing only those specific details relevant to understanding the embodiments of the present invention, so as not to obscure the description with details that are readily apparent in themselves.

DETAILED DESCRIPTION OF THE INVENTION

In this document, the term "container" refers to any container that is transportable by a vehicle, a train, a vessel, and / or an aircraft, and that is designed to store transportable goods such as packed and / or unpacked items and / or other types of cargo. Accordingly, an example of a container includes an enclosed container that is non-removably secured to a platform with wheels and a hitch for towing by a powered vehicle. An example of a container also includes an enclosed container that is removably secured to a platform with wheels and a tow bar for towing by a powered vehicle. An example of a container also includes an enclosure that is non-removably attached to a frame of a powered vehicle, such as may be the case with a delivery truck, box truck, etc. Although the exemplary embodiments described below may appear to refer to one type of container, the scope of the invention extends to other types of containers as defined above.

In one embodiment, the invention relates to a method for dimensioning a space (for example a container) that is bounded by at least a first wall and a second wall opposite the first wall, the space being three-dimensional and definable via height, width, and depth coordinates. The method comprises: obtaining, by an image capture device, a three-dimensional image of at least a portion of the space; analyzing the three-dimensional image to determine a first equation that defines a first plane corresponding to the first wall and to determine a second equation that defines a second plane corresponding to the second wall; solving the first equation for a first coordinate value; solving the second equation for a second coordinate value, wherein the first coordinate value and the second coordinate value are one of a width coordinate or a height coordinate; and calculating a first distance based at least in part on the first coordinate value and the second coordinate value.

In another embodiment, the invention relates to a method for dimensioning a height of a space (for example a container) bounded by at least a floor, a first upstanding wall, and a second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates.
The method comprises: obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising three-dimensional point data; analyzing the three-dimensional image to determine a first equation that defines a first plane corresponding to the floor; solving the first equation for a first height coordinate value; determining a second height coordinate value selected from a first plurality of largest height coordinate values associated with the first upstanding wall; determining a third height coordinate value selected from a second plurality of largest height coordinate values associated with the second upstanding wall; and calculating a first distance based at least in part on the first height coordinate value and the lowest of the second height coordinate value and the third height coordinate value.

In yet another embodiment, the invention relates to a method for dimensioning a depth of a space (e.g. a container) bounded by at least a floor, a first upstanding wall, a second upstanding wall opposite and parallel to the first upstanding wall, and a third upstanding wall normal to the first upstanding wall and the second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates. The method comprises: obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising points with three-dimensional point data; obtaining a two-dimensional image of the at least one portion of the space, the two-dimensional image comprising pixels with pixel data, at least some of the points corresponding to some of the pixels; and performing dimensional analysis on a filtered portion of the two-dimensional image, the filtered portion including at least some of the pixels that do not correspond to at least some of the points.

With reference to the drawing, FIG. 1 shows an exemplary environment where embodiments of the present invention can be implemented. In the present example, the environment is provided in the form of a loading dock 100 (also referred to as a loading facility) where containers 102 are loaded with various goods and / or where various goods are unloaded from the containers 102. The loading dock 100 comprises a facility 104 that has a plurality of loading compartments 106.1-106.n facing a loading facility lot 108 where vehicles, such as tractors (not shown) delivering and collecting containers 102, can be placed. Typically, each container 102 faces the facility 104 with its back, so that it is generally transverse to the wall comprising the loading compartments 106 and in line with one of the loading compartments (in this case 106.3). As illustrated, each loading compartment 106 includes a compartment door 110 that can be lowered to close the loading compartment 106 or lifted to open it, thereby allowing the inside of the facility 104 to be accessed therethrough. Additionally, each loading compartment 106 is provided with a container monitoring unit (CMU) 112. The CMU is mounted near the container loading area, preferably in the upper portion of the loading compartment 106 outside the door 110, facing the loading facility lot 108 or the inside / rear of a container 102 if one is coupled to the loading compartment. To protect the CMU from inclement weather, it could be mounted under a tarpaulin cover 114. Once a container is coupled, goods can be loaded into / unloaded from the container 102, with the CMU 112 keeping a view of the rear / inside of the container.
FIG. 2 is an exemplary perspective view of the loading facility 104 of FIG. 1 as viewed from the inside, showing a container 102 that is coupled at loading compartment 106.3 with an open container door, and a container 116 that is coupled at loading compartment 106.2 with a closed container door 118. To help determine the status of a container door, the CMU 112 is used, as further described below.

In the embodiment described here and as shown in FIG. 3, the CMU 112 is a mountable device that includes a 3D depth camera 120 for recording 3D (three-dimensional) images (e.g., 3D image data comprising a plurality of points with three-dimensional point data) and a 2D camera 122 for recording 2D images (for example, 2D image data). The 2D camera can be an RGB (red, green, blue) camera for recording 2D images. The CMU 112 can also include one or more processors and one or more computer memories for storing image data and / or for executing applications / instructions that perform analysis or other functions as described herein. The CMU 112 may comprise, for example, flash memory for determining, storing, or otherwise processing the image data and / or post-scan data. In addition, the CMU 112 may further comprise a network interface to enable communication with other devices (such as server 130). The network interface of the CMU 112 may include any suitable type of communication interface(s) (e.g., wired and / or wireless interfaces) arranged to operate in accordance with a suitable protocol. In various embodiments, and as shown in FIGs. 1 and 2, the CMU 112 is mounted via a mounting bracket 124 and oriented toward the coupled containers to record 3D and / or 2D image data of their insides and outsides.

In one embodiment, for recording 3D image data, the 3D depth camera 120 comprises an infrared (IR) projector and a related IR camera. The IR projector projects a pattern of IR light or beams onto an object or surface, which may include surfaces of the container 102 (such as the door, walls, floor, etc.), objects inside the container (such as boxes, packages, temporary shipping materials, etc.), and / or surfaces of the loading facility lot 108 (such as the surface on which the containers are parked). The IR light or beams can be distributed over the object or surface in a pattern of dots or points by the IR projector, which can be measured or scanned by the IR camera. A depth detection application, such as one executed on one or more processors or memories of the CMU 112, can determine various depth values based on the pattern of dots or points, for example depth values of the inside of the container 102. A near-depth object (for example, nearby boxes, packages, etc.) can be determined where the dots or points are close together, and far-depth objects (for example, distant boxes, packages, etc.) can be determined where the points are more spread out. The various depth values can be used by the depth detection application and / or the CMU 112 to generate a depth map. The depth map can represent a 3D image of, or contain 3D image data from, the objects or surfaces measured or scanned by the 3D depth camera 120. Additionally, in an embodiment, for recording 2D image data, the 2D camera 122 includes an RGB (red, green, blue) based camera for recording 2D images that have RGB-based pixel data.
In some embodiments, the 2D camera 122 records 2D images and related 2D image data at the same or a similar point in time as the 3D depth camera 120, such that the CMU 112 may have both 3D image data and 2D image data available for a specific surface, object, or scene at the same or a comparable time.

Referring to FIGs. 4A and 4B, the CMU can be oriented such that the fields of view (FOV) 126 of the 3D camera and the 2D camera extend to capture a majority of the inside of the container 116.2, 116.3 or 116.4. Additionally, both FOVs can overlap considerably to record data across essentially the same area. Consequently, the CMU 112 can scan, measure, or otherwise record image data of the walls, floor, ceiling, packages, or other objects or surfaces within the container to determine the 3D and 2D image data. Similarly, when a container is absent from the loading compartment, the CMU can scan, measure, or otherwise record image data of the loading facility lot 108 surface to determine the 3D and 2D image data. The image data can be processed by the one or more processors and / or memories of the CMU 112 (or, in some embodiments, by one or more remote processors and / or memories of a server) to perform analysis functions, such as graphical or image analysis, as described by the one or more flowcharts, block diagrams, methods, functions, or various embodiments herein. In some embodiments, the CMU 112 processes the 3D and 2D image data for use by other devices (e.g., client device 128, which may be in the form of a mobile device such as a tablet, smartphone, laptop, or other such mobile computer system, or server 130, which may be in the form of a single computer or multiple computers that operate to control access to a centralized resource or service in a network). Processing the image data may generate post-scan data that includes metadata, simplified data, normalized data, result data, status data, or alert data as determined from the originally scanned or measured data.

As shown in FIG. 5, which illustrates a block connection diagram between the CMU 112, server 130, and client device 128, these devices can be connected via a suitable communication means, including wired and / or wireless connection components that can implement one or more communication protocol standards, such as, for example, TCP/IP, WiFi (802.11b), Bluetooth, Ethernet, or any other suitable communication protocol or standard. In some embodiments, the server 130 may be located at the same loading facility 104. In other embodiments, the server 130 may be located at a remote location, such as on a cloud platform or at another remote location. In still other embodiments, the server 130 may be formed from a combination of local and cloud-based computers. The server 130 is arranged to execute computer instructions to perform operations associated with the systems and methods described herein. The server 130 can implement enterprise service software that includes, for example, RESTful (representational state transfer) API services, message queuing services, and event services that can be provided by various platforms or specifications, such as the J2EE specification implemented by the Oracle WebLogic Server platform, the JBoss platform, or the IBM WebSphere platform, etc. Other technologies or platforms, such as Ruby on Rails, Microsoft .NET, or similar, can also be used.
To assist with dimensioning containers, the above components can be used, alone or in combination, to detect and / or provide various measurements of the inside of a container coupled to a loading compartment. Reference is now made to FIG. 6, which shows a flow chart representative of a method for dimensioning a space bounded by at least a first wall and a second wall opposite the first wall, the space being three-dimensional and definable via height, width, and depth coordinates.

At step 200, the method includes the operation of using an image capture device to obtain a three-dimensional image of at least a portion of the space. The image capture device can be implemented via the CMU 112, which is arranged for recording 3D images. In the case of dimensioning the inside of a container at a loading facility, it is preferable to orient the image capture device so that its 3D FOV extends over the area of the loading facility lot, and more specifically over the area where a container (such as trailer 102) is expected to be positioned during loading and unloading procedures. This arrangement allows the image capture device to detect the presence or absence of various objects within its FOV (by recording and analyzing 3D data) and to make various determinations based thereon.

Next, at step 202, the method includes the operation of analyzing the three-dimensional image to determine a first equation that defines a first plane corresponding to the first wall and to determine a second equation that defines a second plane corresponding to the second wall. Referring to FIGs. 7A and 7B, which respectively show 2D and 3D representations of an interior 300 of a loading container as recorded by a CMU image capture device, the surface of a first wall 302 is represented by the set of points 304 and the surface corresponding to a second wall 306 is represented by the set of points 308. Knowing that the analysis must focus on detecting a plane (i.e., a substantially flat surface), one can use 3D image segmentation analysis. In some embodiments, sample consensus (SAC) segmentation analysis can be used to determine points in the 3D image that correspond to different planes or surfaces. This can be applied to a wide variety of surfaces, including inside and outside surfaces of the trailer (for example, internal walls, floor, ceiling, and external surfaces such as the outside of the door) and also surfaces of objects placed within the trailer itself. SAC segmentation analysis determines or segments the various planes or surfaces of the environment into x, y, z coordinate planes by identifying a correlation of common points along x, y, z planes oriented in the 3D image data. As such, this method can be used to analyze a certain plurality of points within the 3D image, to identify the presence of a plane corresponding to a substantially flat surface, and to define that plane / substantially flat surface by an equation of the form Ax + By + Cz = D. In addition, one can also determine whether a variance of the respective depth values of a sub-plurality of the plurality of points lies within a predetermined depth-variance threshold, the variance lying within the predetermined depth-variance threshold being an indicator that the three-dimensional formation is substantially flat.
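By way of illustration only (the description does not prescribe any particular implementation of the SAC segmentation), such a sample-consensus plane fit could be sketched in Python as follows; the function name, parameter values, and the NumPy representation of the point cloud are assumptions made for this example:

```python
import numpy as np

def sac_plane_segmentation(points, dist_thresh=0.02, iterations=500, seed=None):
    """Minimal RANSAC-style sample-consensus plane fit.

    points: (N, 3) array of x, y, z coordinates in meters.
    Returns ((A, B, C, D), inlier_indices) for the plane Ax + By + Cz = D.
    """
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, np.empty(0, dtype=int)
    for _ in range(iterations):
        # Pick three distinct points and form the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ p0                      # plane satisfies normal . p = d
        distances = np.abs(points @ normal - d)
        inliers = np.flatnonzero(distances < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (*normal, d), inliers
    return best_plane, best_inliers
```

Segmenting the dominant plane, removing its inliers from the cloud, and calling the routine again would then yield the remaining walls, the floor, and so on in turn.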
For FIGs. 7A and 7B, SAC segmentation analysis results in the first wall 302 being defined by a plane with the equation:

0.999536x + 0.00416241y + 0.0301683z = -1.15774 (1)

and the second wall 306 being defined by a plane with the equation:

0.999762x - 0.0210643y + 0.00561335z = 1.35227 (2)

where both equations were generated for use with x, y, z coordinates measured in meters.

Having obtained the equations for the planes defining the first and the second wall, the method includes, at step 204, the operation of solving the first equation for a first coordinate value and, at step 206, the operation of solving the second equation for a second coordinate value, where the first coordinate value and the second coordinate value are of the same coordinate type. Preferably, the first coordinate value and the second coordinate value are a width or a height coordinate. In the example of FIGs. 7A and 7B, solving equations (1) and (2) for the width coordinate x includes selecting y and z values and substituting these values into equations (1) and (2) to solve for x. For equation (1), selecting y = 1 and z = 1, for example, results in x = -1.192624088 meters. Using the same y and z values in equation (2) results in x = 1.368046545 meters. It is noted that, when solving for one coordinate value, any values can be selected for the other two coordinates (i.e., if solving for x, any value can be selected for y and z). However, in a preferred embodiment, values representative of coordinates that are close to the image capture device should be selected (for example, within 3 meters of the image capture device along the z direction and within 3 meters of the image capture device along the y direction). This reduces the risk that distortion created by the optics within the image capture device, or perspective distortion, will have a significant effect on the calculations.

After having solved for the first and the second coordinate value, the method at step 208 comprises the operation of calculating a first distance based at least in part on the first coordinate value and the second coordinate value. With regard to the example of FIGs. 7A and 7B and equations (1) and (2) above, each of the calculated first and second coordinate values represents a point on the respective plane located at the same height and depth in the 3D space. Thus, taking the absolute value of the difference between the first and second coordinate values (2.560670633 m) can serve to provide the desired calculation of the distance between the two planes at the given coordinates. In the case of a trailer, as used in the example of FIGs. 7A and 7B, this distance can be used as a global width dimension, since the trailer comprises two parallel and opposite side walls that remain the same distance apart over the entire depth of the trailer.
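As a minimal sketch of steps 204-208 (illustrative only; the variable names are assumptions), the width follows directly from equations (1) and (2):

```python
def solve_for_x(plane, y, z):
    """Solve Ax + By + Cz = D for x at a chosen (y, z)."""
    a, b, c, d = plane
    return (d - b * y - c * z) / a

# Coefficients (A, B, C, D) from equations (1) and (2); units are meters.
wall_302 = (0.999536, 0.00416241, 0.0301683, -1.15774)
wall_306 = (0.999762, -0.0210643, 0.00561335, 1.35227)

# Steps 204/206: solve both planes at the same (y, z), chosen within
# 3 m of the image capture device to limit distortion effects.
x1 = solve_for_x(wall_302, y=1.0, z=1.0)   # -1.192624... m
x2 = solve_for_x(wall_306, y=1.0, z=1.0)   #  1.368046... m

# Step 208: the width is the absolute difference of the two values.
width = abs(x1 - x2)                       #  2.560670... m
print(width)
```

Averaging the result over several (y, z) pairs, as discussed below, is a straightforward extension.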
This is occasionally seen BE2018 / 5903 for delivery trucks, for example, where the cargo area will not only touch the rear of the driver's cab but will also partially extend over the top thereof. However, the principles of this description are equally applicable to those cases where a dimensional representation over a multiple number of points is made possible. Dimensional calculation over multiple points can also be used to more accurately determine the desired distance between two planes. Referring to equations (1) and (2), the equations can be solved, for example, for x a n number of times using n pairs of y, z coordinate values. The results can then be averaged in an effort to obtain a more accurate calculation. In addition, it may be desirable to perform image distortion correction after the 3D image has been recorded by the image recording device. Although the method of FIG. 6 can be successfully used in a setting such as cargo-carrying container to calculate both width and height, in some cases height measurement calculation can be at least difficult to obtain accurately. This can occur in cases where the image pickup device is mounted fairly close to one of the walls (such as the CMU that is attached near the ceiling of the container) so that it is unable to effectively record 3D data that has an accurate allows detection of a plane and thus makes an accurate calculation of a comparison that would define such a plane impossible. In this case, an adapted approach can be used to determine the height of the container. An example of such an approach is provided in FIG. 8 which shows a flow chart representative of a method for dimensioning a height of a space bounded by at least one floor, a first raised wall and a second raised wall, the space being three-dimensional and definable via height, width and depth coordinates. Observed BE2018 / 5903, the same hardware and / or analysis can be applied to the method of FIG. 8 as applied to the method of FIG. 6 to obtain and / or calculate various data such as image data and / or area data. The method of FIG. 8 starts with step 400 which comprises obtaining, via, for example, an image recording device, a three-dimensional image of at least a portion of the space, the three-dimensional image comprising three-dimensional point data. Next, at step 402, the method includes analyzing the three-dimensional image to determine a first equation that defines a first plane corresponding to the floor. The comparison can be expressed as Ax + By + Cz = D. As described above, SAC segmentation analysis can be used to obtain the correct comparison. Next, at step 404, the method includes the operation of solving the first equation for a first height coordinate value (e.g., solving for y in the case that an equation Ax + By + Cz = D is used by selecting x and z values corresponding to distances that are no more than 3 meters away from the image capture device used to obtain the three-dimensional image along the respective axis). Next, the method at steps 406 and 408 includes the operations of determining a second and third height coordinate value associated with the tops of the first and second upstanding walls, respectively. Since the determination of the coordinate values is driven by the three-dimensional point data in the 3D image and may include data noise, it is not always preferable to simply select a point on a wall surface with the highest height coordinate value. 
Instead, upon detecting a certain plane corresponding to a wall, filter(s) can be applied to the set of points that represent the wall and the plane, to remove points that might, for example, correspond to noise. This may include, for example, an edge detection filter that could more clearly define the top edge of the wall, allowing the selection of a highest height coordinate value within the filtered point set that is representative of the highest point on the wall. Accordingly, the selected height coordinate value can be selected from a plurality of largest height coordinate values associated with the respective upstanding wall. Finally, at step 410, the operation of calculating a height distance can be performed. Since in practice there are cases where one side wall of a container is a little lower than the other, or where, due to distortion, one side wall appears a little taller than the opposite side wall, the distance calculation of step 410 uses the lowest of the second height coordinate value and the third height coordinate value. Thus, the final distance calculation can be performed by taking the absolute value of the difference between the first height coordinate value and the lowest of the second height coordinate value and the third height coordinate value. This distance may be representative of the height of a space such as a cargo-carrying container, particularly where 3D data representative of the ceiling cannot be accurately recorded or has been filtered away for various reasons.
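A possible rendering of steps 404-410 is sketched below, assuming the floor plane has already been segmented (for example with the sample-consensus routine sketched earlier), that the wall point sets have been noise-filtered, and that y increases upward; selecting "from among the largest height coordinate values" is interpreted here, as one illustrative choice, as taking the smallest of the k largest y values of each wall:

```python
import numpy as np

def solve_for_y(plane, x, z):
    """Solve Ax + By + Cz = D for the height coordinate y at (x, z)."""
    a, b, c, d = plane
    return (d - a * x - c * z) / b

def container_height(floor_plane, wall1_points, wall2_points, k=50):
    """Steps 404-410: height between the floor and the lower wall top.

    wall1_points / wall2_points: (N, 3) arrays of noise-filtered points
    belonging to the two upstanding walls; k damps residual noise by
    selecting the height from among the k largest y values.
    """
    # Step 404: floor height evaluated close to the camera (x, z within 3 m).
    floor_y = solve_for_y(floor_plane, x=1.0, z=1.0)
    # Steps 406/408: a representative "top" height for each wall.
    top1 = np.sort(wall1_points[:, 1])[-k:].min()
    top2 = np.sort(wall2_points[:, 1])[-k:].min()
    # Step 410: use the lower of the two wall tops.
    return abs(min(top1, top2) - floor_y)
```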
Although the methods described above may be particularly advantageous for dimension detection of height and width parameters of a space bounded by opposite walls (including floor and ceiling), additional / separate methods may be required to determine the depth of a space such as a cargo-carrying container. The need for this approach may arise from the fact that certain walls of a container lie outside the depth detection range of a 3D image capture device. With reference to FIG. 1, for example, the inner portion of the front wall 102.2 of container 102 cannot be detected by the CMU 112 if the length of the container exceeds the maximum depth detection range of the CMU. The exemplary method provided in FIG. 9 attempts to resolve this issue.

Specifically, FIG. 9 is a flow chart representative of a method for dimensioning a depth of a space bounded by at least a floor, a first upstanding wall, a second upstanding wall opposite and parallel to the first upstanding wall, and a third upstanding wall normal to the first upstanding wall and the second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates. As with the previous methods, the same hardware and / or analysis can be applied to the method of FIG. 9. At step 500, the method comprises the operation of obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising points with three-dimensional point data. Next, at step 502, the method comprises obtaining a two-dimensional image of the at least one portion of the space, the two-dimensional image comprising pixels with pixel data, at least some of the points corresponding to some of the pixels. Finally, at step 504, the method includes the operation of performing dimensional analysis on a filtered portion of the two-dimensional image, the filtered portion including at least some of the pixels that do not correspond to at least some of the points.

Referring back to the example of FIGs. 7A and 7B, it can be seen from the visual representation of the 3D image data in FIG. 7B, formed by connecting the points, that points (and therefore point data) corresponding to the far end of the container are missing. Thus, in performing steps 500 and 502, one can use the images of FIGs. 7A and 7B and filter the 2D image by superimposing the three-dimensional image over the two-dimensional image and retaining only the 2D pixel data for areas that do not have 3D point data. The result of such filtering is shown in FIG. 10. Once the filtered 2D image has been obtained, further image analysis can be performed on it to determine the depth. The filtered image can be compared, for example, with a plurality of previously existing images of containers, each having a known depth. A match between the images will then provide a match with a known depth. In other examples, the filtered 2D image can be combined with 3D point data to make depth calculations. For example, since walls 302 and 306 can be detected, their planes can be projected in 3D space over a certain distance. Although this distance is initially unknown, it can be seen that the wall 102.2 is visible in the 2D image of FIG. 7A. By overlaying the 3D data with the extended walls onto the filtered 2D data, the rear wall 102.2 can be placed in the path of the two walls 302 and 306. Having this positional information allows a selection of one or more z coordinate values along the wall 302 and / or 306 projections that first intersect the wall 102.2. This value can then serve as the depth value of the container.
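The filtering of steps 500-504 that produces FIG. 10 can be pictured as masking out every 2D pixel covered by a projected 3D point and keeping the remainder. The sketch below is only an illustration; it assumes the 3D points have already been projected to in-bounds integer pixel coordinates (u, v), the projection itself depending on the camera model and being omitted here:

```python
import numpy as np

def filter_2d_by_3d(image, point_pixels):
    """Keep only the 2D pixels that have no corresponding 3D point.

    image: (H, W, 3) RGB array; point_pixels: (N, 2) integer array of
    (u, v) pixel coordinates of the projected 3D points.
    Returns the filtered image and the boolean keep-mask.
    """
    keep = np.ones(image.shape[:2], dtype=bool)
    u, v = point_pixels[:, 0], point_pixels[:, 1]
    keep[v, u] = False              # pixels covered by 3D point data
    filtered = image.copy()
    filtered[~keep] = 0             # blank covered areas, as in FIG. 10
    return filtered, keep
```

The retained region can then be compared against previously recorded images of containers of known depth, or combined with the projected wall planes as described above.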
Specific embodiments have been described in the foregoing description. However, various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present teachings. Moreover, the described embodiments / examples / implementations should not be interpreted as mutually exclusive, but instead as possibly combinable if such combinations are possible in any way. In other words, any measure described in one of the above embodiments / examples / implementations can be included in another embodiment / example / implementation. In addition, none of the steps of a method described herein has a specific sequence, unless it is expressly stated that no other sequence is possible or necessary for the remaining steps of the method. The benefits, solutions to problems, and any element that can lead to any benefit or solution occurring or becoming more pronounced should not be interpreted as critical, required, or essential measures or elements of any or all of the claims. The invention is defined only by the appended claims, including any adjustments made during the grant phase of this application, and all equivalents of those claims as described. For the sake of clarity and brevity of description, features are described herein as part of the same or of separate embodiments. It is noted, however, that the scope of the invention may include embodiments that comprise combinations of all or some of the features described. It is further noted that the embodiments shown have the same or similar components, unless they are described as being different.

In addition, in this document, relative terms such as first and second, top and bottom, and the like may be used only to distinguish one entity or action from another entity or action, without necessarily requiring or implying an actual such relationship or order between such entities or actions. The terms "comprises", "comprising", "has", "having", "contains", "containing", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but may include other elements that are not explicitly listed or that are inherent in such a process, method, article, or device. An element preceded by "comprises ... a", "has ... a", or "contains ... a" does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or device comprising the element. The term "a" or "an" is defined as one or more, unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", or any other version thereof, are defined as being close to, as understood by those skilled in the art, and in a non-limiting embodiment the term is defined as being within 10%, in another embodiment as being within 5%, in another embodiment as being within 1%, and in another embodiment as being within 0.5%. The term "coupled", as used herein, is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but can also be configured in ways that are not specified.

Some embodiments may comprise one or more generic or specialized processors (or "processing devices"), such as microprocessors, digital signal processors, customized processors, and field-programmable gate arrays (FPGAs), together with uniquely stored program instructions (including both software and firmware) that control the one or more processors, in combination with certain non-processor circuits, to implement some, most, or all functions of the method and / or device described herein. Alternatively, some or all of the functions could be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function, or some combinations of certain functions, is implemented as custom logic. A combination of the two approaches could, of course, also be used.

In addition, an embodiment may be implemented as a computer-readable storage medium having computer-readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage media include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (read-only memory), a PROM (programmable read-only memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), and a flash memory. Those skilled in the art, despite potentially significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, will, when guided by the concepts and principles described herein, easily be able to generate such software instructions and programs and ICs with minimal experimentation.
The abstract is provided to enable the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret the claims or to limit their scope. In addition, in the foregoing figure description, various measures may be grouped together in various embodiments for the purpose of streamlining the description. This manner of description should not be interpreted as reflecting an intention that the claimed embodiments require more measures than are explicitly recited in each claim. Rather, as the following claims show, the inventive subject matter lies in less than all features of a single described embodiment. Thus, the following claims are hereby incorporated into the figure description, with each claim standing on its own as separately claimed subject matter. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage. A large number of variants will be clear to the skilled person. All variants are considered to be included within the scope of the invention as defined in the following claims.
Claims:
Claims (23)

[1] A method for dimensioning a space bounded by at least a first wall and a second wall opposite the first wall, the space being three-dimensional and definable via height, width, and depth coordinates, the method comprising: obtaining, by an image capture device, a three-dimensional image of at least a portion of the space; analyzing the three-dimensional image to determine a first equation that defines a first plane corresponding to the first wall and to determine a second equation that defines a second plane corresponding to the second wall; solving the first equation for a first coordinate value; solving the second equation for a second coordinate value, wherein the first coordinate value and the second coordinate value are one of a width coordinate or a height coordinate; and calculating a first distance based at least in part on the first coordinate value and the second coordinate value.

[2] The method according to claim 1, wherein the first wall is parallel to the second wall.

[3] The method according to claim 1 or 2, wherein the operation of analyzing the three-dimensional image comprises performing a sample consensus (SAC) segmentation analysis.

[4] The method according to any one of the preceding claims, wherein the first equation is expressed as Ax + By + Cz = D, the second equation is expressed as Ex + Fy + Gz = H, and wherein x corresponds to a width coordinate value, y corresponds to a height coordinate value, and z corresponds to a depth coordinate value.

[5] The method according to claim 4, wherein the operation of solving the first equation and the operation of solving the second equation comprise selecting a z value corresponding to a distance, along a respective axis, not greater than 3 meters from an image capture device used to obtain the three-dimensional image.

[6] The method according to any one of the preceding claims, wherein the operation of calculating the first distance comprises subtracting the first coordinate value from the second coordinate value.

[7] The method according to any one of the preceding claims, wherein the three-dimensional image comprises a point cloud.

[8] The method according to any one of the preceding claims, further comprising correcting for distortion caused by an image capture device used to obtain the three-dimensional image.

[9] The method according to any one of the preceding claims, further comprising: solving the first equation for a third coordinate value; solving the second equation for a fourth coordinate value, wherein the third coordinate value and the fourth coordinate value are the same one of a width coordinate or a height coordinate as the first coordinate value and the second coordinate value; calculating a second distance between the first wall and the second wall based at least in part on the third coordinate value and the fourth coordinate value; and calculating a third distance based at least in part on the first distance and the second distance.

[10] The method according to any one of the preceding claims, wherein the first distance corresponds to a separation between the first wall and the second wall.

[11] The method according to any one of the preceding claims, wherein the space is an inner space of a container that has an opening.

[12] The method according to claim 11, wherein the operation of obtaining the three-dimensional image comprises mounting a three-dimensional image capture device near the opening.
[13] A method for dimensioning a height of a space bounded by at least a floor, a first upstanding wall, and a second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates, the method comprising: obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising three-dimensional point data; analyzing the three-dimensional image to determine a first equation that defines a first plane corresponding to the floor; solving the first equation for a first height coordinate value; determining a second height coordinate value selected from a first plurality of largest height coordinate values associated with the first upstanding wall; determining a third height coordinate value selected from a second plurality of largest height coordinate values associated with the second upstanding wall; and calculating a first distance based at least in part on the first height coordinate value and the lowest of the second height coordinate value and the third height coordinate value.

[14] The method according to claim 13, wherein the first equation is expressed as Ax + By + Cz = D, and wherein the operation of solving the first equation comprises solving for y by selecting an x value and a z value corresponding to respective distances, along the respective axes, of no more than 3 meters from an image capture device used to obtain the three-dimensional image.

[15] The method according to claim 13 or 14, wherein the first wall is opposite and parallel to the second wall.

[16] The method according to any one of the preceding claims 13-15, wherein the space is further bounded by a ceiling and wherein the three-dimensional point data omits data associated with the ceiling.

[17] The method according to any one of the preceding claims 13-16, further comprising applying a noise filter to the three-dimensional image prior to the operations of determining the second height coordinate value and determining the third height coordinate value.

[18] The method according to any one of the preceding claims 13-17, wherein the operation of analyzing the three-dimensional image comprises performing a sample consensus (SAC) segmentation analysis.

[19] A method for dimensioning a depth of a space bounded by at least a floor, a first upstanding wall, a second upstanding wall opposite and parallel to the first upstanding wall, and a third upstanding wall normal to the first upstanding wall and the second upstanding wall, the space being three-dimensional and definable via height, width, and depth coordinates, the method comprising: obtaining a three-dimensional image of at least a portion of the space, the three-dimensional image comprising points with three-dimensional point data; obtaining a two-dimensional image of the at least one portion of the space, the two-dimensional image comprising pixels with pixel data, wherein at least some of the points correspond to some of the pixels; and performing dimensional analysis on a filtered portion of the two-dimensional image, wherein the filtered portion includes at least some of the pixels that do not correspond to the at least some of the points.

[20] The method according to claim 19, further comprising superimposing the three-dimensional image over the two-dimensional image to obtain the filtered portion of the two-dimensional image, wherein the filtered portion of the two-dimensional image is a section of the two-dimensional image over which points of the three-dimensional image have been overlaid.
[21] The method according to claim 19 or 20, wherein the operation of performing dimensional analysis comprises comparing the filtered portion of the two-dimensional image with a plurality of previously existing images of spaces each having a known depth.

[22] The method according to any one of the preceding claims 19-21, wherein the space is an inner space of a container that has an opening, and wherein the operations of obtaining the three-dimensional image and obtaining the two-dimensional image comprise mounting an image capture device near the opening, the image capture device being arranged to take the three-dimensional image and the two-dimensional image from essentially the same point of view.

[23] The method according to any one of the preceding claims 19-22, wherein the operation of performing dimensional analysis comprises analyzing the filtered portion of the two-dimensional image with at least some three-dimensional point data of the three-dimensional image.